Zigzag Codes: MDS Array Codes with Optimal Rebuilding
MDS array codes are widely used in storage systems to protect data against
erasures. We address the \emph{rebuilding ratio} problem, namely, in the case
of erasures, what is the fraction of the remaining information that needs to be
accessed in order to rebuild \emph{exactly} the lost information? It is clear
that when the number of erasures equals the maximum number of erasures that an
MDS code can correct then the rebuilding ratio is 1 (access all the remaining
information). However, the interesting and more practical case is when the
number of erasures is smaller than the erasure correcting capability of the
code. For example, consider an MDS code that can correct two erasures: What is
the smallest amount of information that one needs to access in order to correct
a single erasure? Previous work showed that the rebuilding ratio is bounded
between 1/2 and 3/4; however, the exact value was left as an open problem. In
this paper, we solve this open problem and prove that for the case of a single
erasure with a 2-erasure correcting code, the rebuilding ratio is 1/2. In
general, we construct a new family of $r$-erasure correcting MDS array codes
that has optimal rebuilding ratio of $e/r$ in the case of $e$ erasures,
$1 \le e \le r$. Our array codes have efficient encoding and decoding
algorithms (for the case $r = 2$ they use a finite field of size 3) and an
optimal update property.

Comment: 23 pages, 5 figures, submitted to IEEE Transactions on Information
Theory
Supervised nonlinear spectral unmixing using a post-nonlinear mixing model for hyperspectral imagery
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by additive white Gaussian noise. These nonlinear functions are approximated by polynomials, leading to a polynomial post-nonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
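The forward model can be sketched directly. The snippet below simulates one pixel under a second-order post-nonlinear form, y = Ma + b·(Ma)⊙(Ma) + n; the endmember spectra `M`, abundances `a`, nonlinearity parameter `b`, and noise level are made-up toy values for illustration, not the paper's data or its Bayesian estimator:

```python
import random

def ppnm_pixel(endmembers, abundances, b, noise_std, rng=random.Random(0)):
    """One simulated pixel under a second-order post-nonlinear model:
    band-wise y = (M a) + b * (M a)**2 + Gaussian noise."""
    bands = len(endmembers[0])
    lin = [sum(m[band] * a for m, a in zip(endmembers, abundances))
           for band in range(bands)]            # linear mixture M a
    return [x + b * x * x + rng.gauss(0.0, noise_std) for x in lin]

# two toy endmember spectra over 4 bands; abundances sum to one
M = [[0.2, 0.4, 0.6, 0.8],
     [0.9, 0.7, 0.5, 0.3]]
a = [0.3, 0.7]
pixel = ppnm_pixel(M, a, b=0.5, noise_std=0.0)  # noiseless for illustration
```

The unmixing problem is the inverse of this map: recover `a` (and `b`) from observed pixels.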
Sum-Rate Maximization in Two-Way AF MIMO Relaying: Polynomial Time Solutions to a Class of DC Programming Problems
Sum-rate maximization in two-way amplify-and-forward (AF) multiple-input
multiple-output (MIMO) relaying belongs to the class of difference-of-convex
functions (DC) programming problems. DC programming problems occur as well in
other signal processing applications and are typically solved using different
modifications of the branch-and-bound method. This method, however, does not
have any polynomial time complexity guarantees. In this paper, we show that a
class of DC programming problems, to which the sum-rate maximization in two-way
MIMO relaying belongs, can be solved very efficiently in polynomial time, and
develop two algorithms. The objective function of the problem is represented as
a product of quadratic ratios and parameterized so that its convex part (versus
the concave part) contains only one (or two) optimization variables. One of the
algorithms is called POlynomial-Time DC (POTDC) and is based on semi-definite
programming (SDP) relaxation, linearization, and an iterative search over a
single parameter. The other algorithm is called RAte-maximization via
Generalized EigenvectorS (RAGES) and is based on the generalized eigenvectors
method and an iterative search over two (or one, in its approximate version)
optimization variables. We also derive an upper-bound for the optimal values of
the corresponding optimization problem and show by simulations that this
upper-bound can be achieved by both algorithms. The proposed methods for
maximizing the sum-rate in the two-way AF MIMO relaying system are shown to be
superior to other state-of-the-art algorithms.

Comment: 35 pages, 10 figures, Submitted to the IEEE Trans. Signal Processing
in Nov. 201
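The basic idea of iteratively convexifying a DC objective can be illustrated in one dimension. The sketch below is a generic DCA (convex-concave) iteration, not the POTDC or RAGES algorithms of the paper; the objective f(x) = exp(x) - 2x², its split into convex parts, and the closed-form surrogate minimizer are illustrative assumptions:

```python
import math

def dca(x0, iters=60):
    """DCA / convex-concave iteration for f(x) = exp(x) - 2*x**2,
    a difference of the convex functions g(x) = exp(x) and h(x) = 2*x**2.
    Each step replaces h by its tangent at x_k and minimizes the convex
    surrogate exp(x) - h'(x_k)*x, whose minimizer is log(4*x_k)."""
    x = x0
    for _ in range(iters):
        x = math.log(4.0 * x)   # requires x > 0 throughout
    return x

x_star = dca(1.0)   # converges to a stationary point where exp(x) = 4*x
```

Each surrogate upper-bounds f, so the objective decreases monotonically; the fixed point satisfies the stationarity condition f'(x) = exp(x) - 4x = 0.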
Group-Lasso on Splines for Spectrum Cartography
The unceasing demand for continuous situational awareness calls for
innovative and large-scale signal processing algorithms, complemented by
collaborative and adaptive sensing platforms to accomplish the objectives of
layered sensing and control. Towards this goal, the present paper develops a
spline-based approach to field estimation, which relies on a basis expansion
model of the field of interest. The model entails known bases, weighted by
generic functions estimated from the field's noisy samples. A novel field
estimator is developed based on a regularized variational least-squares (LS)
criterion that yields finitely-parameterized (function) estimates spanned by
thin-plate splines. Robustness considerations motivate well the adoption of an
overcomplete set of (possibly overlapping) basis functions, while a sparsifying
regularizer augmenting the LS cost endows the estimator with the ability to
select a few of these bases that ``better'' explain the data. This parsimonious
field representation becomes possible, because the sparsity-aware spline-based
method of this paper induces a group-Lasso estimator for the coefficients of
the thin-plate spline expansions per basis. A distributed algorithm is also
developed to obtain the group-Lasso estimator using a network of wireless
sensors, or, using multiple processors to balance the load of a single
computational unit. The novel spline-based approach is motivated by a spectrum
cartography application, in which a set of sensing cognitive radios collaborate
to estimate the distribution of RF power in space and frequency. Simulated
tests corroborate that the estimated power spectrum density atlas yields the
desired RF state awareness, since the maps reveal spatial locations where idle
frequency bands can be reused for transmission, even when fading and shadowing
effects are pronounced.

Comment: Submitted to IEEE Transactions on Signal Processing
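The group-Lasso's ability to deselect whole bases comes from block soft-thresholding, the proximal operator of the penalty λ·Σ_g ||w_g||₂. A minimal sketch of that operator (generic, not the paper's distributed spline estimator; the coefficient groups and threshold are toy values):

```python
import math

def group_soft_threshold(groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||w_g||_2:
    each group is shrunk toward zero and dropped entirely when its
    Euclidean norm falls below lam."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(w * w for w in g))
        if norm <= lam:
            out.append([0.0] * len(g))          # whole group (basis) deselected
        else:
            scale = 1.0 - lam / norm
            out.append([scale * w for w in g])
    return out

shrunk = group_soft_threshold([[3.0, 4.0], [0.1, 0.2]], lam=1.0)
# first group survives (norm 5 > 1); second is zeroed (norm ~0.224 <= 1)
```

Applied per basis to the thin-plate spline coefficients, this is what lets only the few bases that "better" explain the data remain active.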
Diffusion Adaptation over Networks under Imperfect Information Exchange and Non-stationary Data
Adaptive networks rely on in-network and collaborative processing among
distributed agents to deliver enhanced performance in estimation and inference
tasks. Information is exchanged among the nodes, usually over noisy links. The
combination weights that are used by the nodes to fuse information from their
neighbors play a critical role in influencing the adaptation and tracking
abilities of the network. This paper first investigates the mean-square
performance of general adaptive diffusion algorithms in the presence of various
sources of imperfect information exchanges, quantization errors, and model
non-stationarities. Among other results, the analysis reveals that link noise
over the regression data modifies the dynamics of the network evolution in a
distinct way, and leads to biased estimates in steady-state. The analysis also
reveals how the network mean-square performance is dependent on the combination
weights. We use these observations to show how the combination weights can be
optimized and adapted. Simulation results illustrate the theoretical findings
and match well with theory.

Comment: 36 pages, 7 figures, to appear in IEEE Transactions on Signal
Processing, June 201
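The role of the combination weights can be seen in a standard adapt-then-combine diffusion LMS recursion. This is a generic textbook sketch with a made-up 3-node network, scalar model, and uniform combination weights, not the optimized weights or the imperfect-exchange/non-stationary model analyzed in the paper:

```python
import random

def atc_diffusion_lms(w_true, neighbors, weights, mu=0.05, steps=2000,
                      rng=random.Random(1)):
    """Adapt-then-combine diffusion LMS: each node runs a local LMS update
    on its own streaming data d = u*w_true + noise, then fuses its
    neighbors' intermediate estimates with the combination weights."""
    n = len(neighbors)
    w = [0.0] * n
    for _ in range(steps):
        psi = []
        for k in range(n):                        # adapt step
            u = rng.gauss(0.0, 1.0)               # regressor
            d = u * w_true + rng.gauss(0.0, 0.1)  # noisy measurement
            psi.append(w[k] + mu * u * (d - u * w[k]))
        w = [sum(weights[k][l] * psi[l] for l in neighbors[k])
             for k in range(n)]                   # combine step
    return w

# toy 3-node line network with uniform weights per neighborhood
nbrs = [[0, 1], [0, 1, 2], [1, 2]]
A = [[0.5, 0.5, 0.0],
     [1 / 3, 1 / 3, 1 / 3],
     [0.0, 0.5, 0.5]]
est = atc_diffusion_lms(2.0, nbrs, A)
```

The paper's analysis concerns how noise injected into `psi` and into the exchanged regressors alters this recursion's steady-state behavior, and how the rows of `A` should then be chosen.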
Quantization of Prior Probabilities for Hypothesis Testing
Bayesian hypothesis testing is investigated when the prior probabilities of
the hypotheses, taken as a random vector, are quantized. Nearest neighbor and
centroid conditions are derived using mean Bayes risk error as a distortion
measure for quantization. A high-resolution approximation to the
distortion-rate function is also obtained. Human decision making in segregated
populations is studied assuming Bayesian hypothesis testing with quantized
priors.
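For a binary Gaussian test, the Bayes risk error distortion can be sketched directly: the risk incurred when the likelihood-ratio threshold is set for a quantized prior, minus the matched-prior risk. The test H0: N(-1,1) vs H1: N(+1,1), the two representation points, and the prior samples below are illustrative assumptions, not the paper's setup:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bayes_risk(p0, q0):
    """Risk of the test H0: N(-1,1) vs H1: N(+1,1) with true prior p0 on H0
    but the likelihood-ratio threshold set as if the prior were q0."""
    t = 0.5 * math.log(q0 / (1.0 - q0))   # LRT threshold for prior q0
    return p0 * (1.0 - phi(t + 1.0)) + (1.0 - p0) * phi(t - 1.0)

def mean_bre(points, priors):
    """Mean Bayes risk error of a quantizer with the given representation
    points, assigning each prior to its nearest point in the BRE sense."""
    total = 0.0
    for p in priors:
        total += min(bayes_risk(p, q) - bayes_risk(p, p) for q in points)
    return total / len(priors)

priors = [i / 100.0 for i in range(5, 100, 5)]
d = mean_bre([0.25, 0.75], priors)   # 2-level quantizer distortion, >= 0
```

Because the matched threshold is Bayes optimal, each error term is nonnegative; the nearest-neighbor and centroid conditions of the paper are the Lloyd-type optimality conditions for this distortion.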
Incremental Relaying for the Gaussian Interference Channel with a Degraded Broadcasting Relay
This paper studies incremental relay strategies for a two-user Gaussian
relay-interference channel with an in-band-reception and
out-of-band-transmission relay, where the link between the relay and the two
receivers is modelled as a degraded broadcast channel. It is shown that
generalized hash-and-forward (GHF) can achieve the capacity region of this
channel to within a constant number of bits in a certain weak relay regime,
where the transmitter-to-relay link gains are not unboundedly stronger than the
interference links between the transmitters and the receivers. The GHF relaying
strategy is ideally suited for the broadcasting relay because it can be
implemented in an incremental fashion, i.e., the relay message to one receiver
is a degraded version of the message to the other receiver. A
generalized-degree-of-freedom (GDoF) analysis in the high signal-to-noise ratio
(SNR) regime reveals that in the symmetric channel setting, each common relay
bit can improve the sum rate roughly by either one bit or two bits
asymptotically depending on the operating regime, and the rate gain can be
interpreted as coming solely from the improvement of the common message rates,
or alternatively in the very weak interference regime as solely coming from the
rate improvement of the private messages. Further, this paper studies an
asymmetric case in which the relay has only a single link to one of the
destinations. It is shown that with only one relay-destination link, the
approximate capacity region can be established for a larger regime of channel
parameters. Further, from a GDoF point of view, the sum-capacity gain due to
the relay can now be thought of as coming from either signal relaying only or
interference forwarding only.

Comment: To appear in IEEE Trans. on Inf. Theory
A Game-Theoretic View of the Interference Channel: Impact of Coordination and Bargaining
This work considers coordination and bargaining between two selfish users
over a Gaussian interference channel. The usual information theoretic approach
assumes full cooperation among users for codebook and rate selection. In the
scenario investigated here, each user is willing to coordinate its actions only
when an incentive exists and benefits of cooperation are fairly allocated. The
users are first allowed to negotiate for the use of a simple Han-Kobayashi type
scheme with fixed power split. Conditions for which users have incentives to
cooperate are identified. Then, two different approaches are used to solve the
associated bargaining problem. First, the Nash Bargaining Solution (NBS) is
used as a tool to get fair information rates and the operating point is
obtained as a result of an optimization problem. Next, a dynamic
alternating-offer bargaining game (AOBG) from bargaining theory is introduced
to model the bargaining process and the rates resulting from negotiation are
characterized. The relationship between the NBS and the equilibrium outcome of
the AOBG is studied and factors that may affect the bargaining outcome are
discussed. Finally, under certain high signal-to-noise ratio regimes, the
bargaining problem for the generalized degrees of freedom is studied.

Comment: 43 pages, 11 figures, to appear in the Special Issue of the IEEE
Transactions on Information Theory on Interference Networks, 201
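On a toy rate region, the Nash Bargaining Solution can be found by maximizing the Nash product over the Pareto frontier. The linear region R1 + R2 ≤ C, the capacity C, and the disagreement rates below are made-up values, not the Han-Kobayashi region studied in the paper:

```python
def nash_bargaining(C, d1, d2, grid=10001):
    """Nash Bargaining Solution on the toy region
    {(R1, R2): R1 + R2 <= C, Ri >= 0}: maximize the Nash product
    (R1 - d1)*(R2 - d2) over the frontier R1 + R2 = C by grid search."""
    best, best_pt = -1.0, (d1, d2)
    for i in range(grid):
        r1 = C * i / (grid - 1)
        r2 = C - r1
        if r1 >= d1 and r2 >= d2:
            prod = (r1 - d1) * (r2 - d2)
            if prod > best:
                best, best_pt = prod, (r1, r2)
    return best_pt

# closed form on this region: Ri = di + (C - d1 - d2) / 2
r1, r2 = nash_bargaining(C=2.0, d1=0.2, d2=0.6)
```

The disagreement point (d1, d2) plays the role of the rates the users achieve without cooperation; the NBS splits the cooperation surplus equally on this linear frontier.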
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While there has been a
handful of previously proposed Bayesian dynamic CS algorithms in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems.

Comment: 32 pages, 7 figures
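DCS-AMP itself is a message-passing algorithm, but the benefit of exploiting temporal correlation can be sketched with a much simpler baseline: an iterative soft-thresholding (ISTA) solver for the per-frame lasso problem, warm-started at the previous frame's estimate. Everything below (dimensions, support, step size, regularization weight) is an illustrative assumption, not the paper's algorithm:

```python
import random

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    return (abs(v) - t) * (1.0 if v > 0 else -1.0) if abs(v) > t else 0.0

def ista(A, y, lam, x0, steps=300, step=0.05):
    """ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1, warm-started at x0
    (e.g. the previous frame's estimate of a slowly varying signal)."""
    m, n = len(A), len(A[0])
    x = list(x0)
    for _ in range(steps):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] + step * g[j], step * lam) for j in range(n)]
    return x

rng = random.Random(0)
m, n = 8, 16
A = [[rng.gauss(0.0, 1.0) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[2], x_true[9] = 1.0, -1.0                 # 2-sparse signal
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]
x_hat = ista(A, y, lam=0.02, x0=[0.0] * n)       # cold start for frame 1
```

On subsequent frames, passing `x_hat` as `x0` reuses the previous support; DCS-AMP goes much further by modeling the amplitude and support correlation explicitly with linear per-iteration cost.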